
    Translations on a context free grammar

    Two schemes for the specification of translations on a context-free grammar are proposed. The first scheme, called a generalized syntax-directed translation (GSDT), consists of a context-free grammar with a set of semantic rules associated with each production of the grammar. In a GSDT an input word is parsed according to the underlying context-free grammar, and at each node of the tree a finite number of translation strings are computed in terms of the translation strings defined at the descendants of that node. The functional relationship between the length of input and the length of output for translations defined by GSDTs is investigated. The second method for the specification of translations is in terms of tree automata, that is, finite automata with output walking on derivation trees of a context-free grammar. It is shown that tree automata provide an exact characterization of those GSDTs with a linear relationship between input and output length.
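    To make the GSDT mechanism concrete, the following minimal Python sketch (illustrative only, not taken from the paper) evaluates one translation string per parse-tree node from the translation strings of its children; a full GSDT may compute several strings per node. The toy grammar, its semantic rules, and the infix-to-postfix example are assumptions made for this sketch.

```python
# Minimal syntax-directed translation in the GSDT spirit: each production of a
# context-free grammar carries a semantic rule that builds a node's translation
# string from the translation strings of its children.  Toy example: infix to
# postfix.  (A real GSDT may attach several translation strings to each node.)

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Node:
    production: str                       # production applied at this node, or "leaf"
    children: List["Node"] = field(default_factory=list)
    symbol: str = ""                      # terminal symbol, used only at leaves

# Semantic rules: map the children's translation strings to this node's string.
RULES: Dict[str, Callable[[List[str]], str]] = {
    "E -> E + T": lambda t: f"{t[0]} {t[2]} +",
    "E -> T":     lambda t: t[0],
    "T -> id":    lambda t: t[0],
}

def translate(node: Node) -> str:
    """Bottom-up evaluation of translation strings over the parse tree."""
    if not node.children:                 # leaf: its translation is the symbol itself
        return node.symbol
    return RULES[node.production]([translate(c) for c in node.children])

# Parse tree of "a + b" under the toy grammar  E -> E + T | T,  T -> id.
tree = Node("E -> E + T", [
    Node("E -> T", [Node("T -> id", [Node("leaf", symbol="a")])]),
    Node("leaf", symbol="+"),
    Node("T -> id", [Node("leaf", symbol="b")]),
])

print(translate(tree))                    # -> "a b +"
```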

    Time and tape complexity of pushdown automaton languages

    An algorithm is presented which will determine whether any string w in Σ*, of length n, is contained in a language L ⊆ Σ* defined by a two-way nondeterministic pushdown automaton. This algorithm requires time n³ when implemented on a random access computer. It requires n⁴ time and n² tape when implemented on a multitape Turing machine. If the pushdown automaton is deterministic, the algorithm requires n² time on a random access computer and n² log n time on a multitape Turing machine.

    Succinct Dictionary Matching With No Slowdown

    The problem of dictionary matching is a classical problem in string matching: given a set S of d strings of total length n characters over an alphabet of size sigma (not necessarily constant), build a data structure so that we can find in any text T all occurrences of strings belonging to S. The classical solution for this problem is the Aho-Corasick automaton, which finds all occ occurrences in a text T in time O(|T| + occ) using a data structure that occupies O(m log m) bits of space, where m <= n + 1 is the number of states in the automaton. In this paper we show that the Aho-Corasick automaton can be represented in just m(log sigma + O(1)) + O(d log(n/d)) bits of space while still maintaining the ability to answer queries in O(|T| + occ) time. To the best of our knowledge, the currently fastest succinct data structure for the dictionary matching problem uses space O(n log sigma) while answering queries in O(|T| log log n + occ) time. In this paper we also show how the space occupancy can be reduced to m(H0 + O(1)) + O(d log(n/d)), where H0 is the empirical entropy of the characters appearing in the trie representation of the set S, provided that sigma < m^epsilon for any constant 0 < epsilon < 1. The query time remains unchanged. (Comment: corrected typos and other minor errors.)
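    For background, the sketch below is the classical pointer-based Aho-Corasick automaton that this paper compresses; it is not the succinct representation itself, and the dictionary in the usage example is hypothetical.

```python
# Plain Aho-Corasick: trie of the dictionary plus failure links, reporting all
# occurrences in O(|T| + occ) time.  This is the O(m log m)-bit baseline that
# the succinct representation improves on.

from collections import deque

def build_aho_corasick(patterns):
    goto, fail, out = [{}], [0], [[]]        # per-state: edges, failure link, outputs
    for p in patterns:                       # 1) build the trie of the dictionary
        s = 0
        for c in p:
            if c not in goto[s]:
                goto[s][c] = len(goto)
                goto.append({})
                fail.append(0)
                out.append([])
            s = goto[s][c]
        out[s].append(p)
    queue = deque(goto[0].values())          # 2) failure links by BFS from the root
    while queue:
        s = queue.popleft()
        for c, t in goto[s].items():
            queue.append(t)
            f = fail[s]
            while f and c not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(c, 0)
            out[t] += out[fail[t]]           # inherit outputs reachable via failure
    return goto, fail, out

def find_all(text, patterns):
    goto, fail, out = build_aho_corasick(patterns)
    s = 0
    for i, c in enumerate(text):
        while s and c not in goto[s]:        # follow failure links on a mismatch
            s = fail[s]
        s = goto[s].get(c, 0)
        for p in out[s]:
            yield i - len(p) + 1, p          # (starting position, pattern)

print(list(find_all("ushers", ["he", "she", "his", "hers"])))
# [(1, 'she'), (2, 'he'), (2, 'hers')]
```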

    Building the Minimal Automaton of A*X in Linear Time, When X Is of Bounded Cardinality

    We present an algorithm for constructing the minimal automaton recognizing A*X, where the pattern X is a set of m non-empty words (m being a fixed integer) over a finite alphabet A whose sum of lengths is n. This algorithm, inspired by Brzozowski's minimization algorithm, uses sparse lists to achieve a linear time complexity with respect to n.
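    As background for the phrase "inspired by Brzozowski's minimization algorithm", the sketch below applies plain Brzozowski minimization (reverse, determinize, reverse, determinize) to a straightforward NFA for A*X; it can be far from linear time in general and is not the paper's sparse-list construction. The alphabet and pattern set in the example are assumptions.

```python
# Brzozowski minimization applied to a simple NFA for A*X.  An automaton is a
# triple (delta, initials, finals) with set-valued transitions, so reverse()
# and determinize() compose directly.

def nfa_for(alphabet, words):
    """NFA for A*X: state 0 loops on every letter of A, and each word of X
    is spelled out on its own fresh branch of states."""
    delta, finals, fresh = {}, set(), 1      # delta: (state, char) -> set of states
    for c in alphabet:
        delta.setdefault((0, c), set()).add(0)   # the A* self-loop
    for w in words:
        s = 0
        for c in w:
            delta.setdefault((s, c), set()).add(fresh)
            s, fresh = fresh, fresh + 1
        finals.add(s)
    return delta, {0}, finals

def reverse(aut):
    """Swap initial and final states and flip every transition."""
    delta, initials, finals = aut
    rdelta = {}
    for (s, c), targets in delta.items():
        for t in targets:
            rdelta.setdefault((t, c), set()).add(s)
    return rdelta, set(finals), set(initials)

def determinize(aut, alphabet):
    """Accessible subset construction; transitions stay set-valued so the
    result can be passed straight back into reverse()."""
    delta, initials, finals = aut
    start = frozenset(initials)
    ddelta, dfinals, todo, seen = {}, set(), [start], {start}
    while todo:
        S = todo.pop()
        if S & finals:
            dfinals.add(S)
        for c in alphabet:
            T = frozenset(t for s in S for t in delta.get((s, c), ()))
            ddelta[(S, c)] = {T}
            if T not in seen:
                seen.add(T)
                todo.append(T)
    return ddelta, {start}, dfinals

def brzozowski_minimal(aut, alphabet):
    """Minimal DFA via determinize(reverse(determinize(reverse(aut))))."""
    return determinize(reverse(determinize(reverse(aut), alphabet)), alphabet)

A = {"a", "b"}
dfa, start, finals = brzozowski_minimal(nfa_for(A, ["ab", "bb"]), A)
print(len({s for (s, _) in dfa}))   # 3: the minimal DFA for A*{ab, bb} has 3 states
```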

    “Maximal-munch” tokenization in linear time


    Dictionary matching in a stream

    We consider the problem of dictionary matching in a stream. Given a set of strings, known as a dictionary, and a stream of characters arriving one at a time, the task is to report each time some string in our dictionary occurs in the stream. We present a randomised algorithm which takes O(log log(k + m)) time per arriving character and uses O(k log m) words of space, where k is the number of strings in the dictionary and m is the length of the longest string in the dictionary.
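    To make the streaming setting concrete, here is a deliberately naive baseline (not the paper's randomised algorithm): it buffers the last m characters and, on each arriving character, checks every dictionary string, so it spends O(km) time per character instead of O(log log(k + m)). The example dictionary is hypothetical.

```python
# Naive streaming dictionary matcher: report every dictionary string that ends
# at the position of the character that just arrived.

from collections import deque

class NaiveStreamMatcher:
    def __init__(self, dictionary):
        self.dictionary = list(dictionary)
        self.m = max(len(s) for s in self.dictionary)
        self.window = deque(maxlen=self.m)    # last m characters of the stream

    def feed(self, ch):
        """Process one arriving character; return the patterns ending here."""
        self.window.append(ch)
        suffix = "".join(self.window)
        return [p for p in self.dictionary if suffix.endswith(p)]

matcher = NaiveStreamMatcher(["abc", "bc", "cd"])
for i, ch in enumerate("xabcd"):
    for p in matcher.feed(ch):
        print(f"'{p}' ends at position {i}")
# 'abc' ends at position 3; 'bc' ends at position 3; 'cd' ends at position 4
```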

    Formalising Graphical Service Descriptions using SDL

    It is convenient to describe telecomms services using a graphical notation that is accessible to non-specialists. However, the notation should also have a formal interpretation for rigorous analysis. CRESS (Chisel Representation Employing Systematic Specification) has been developed for this purpose. A brief overview of CRESS is given. It is explained how features (additional services) can be defined in a modular fashion, and automatically combined with a base service. Brief case studies illustrate how the approach has been used to describe services in the IN (Intelligent Network), SIP (Session Initiation Protocol), and IVR (Interactive Voice Response). Finally, it is shown how CRESS diagrams are translated into SDL for automated simulation, validation and implementation.

    Upside-Down Preference Reversal: How to Override Ceteris-Paribus Preferences?

    Specific preference statements may reverse general preference statements, thus constituting a change of attitude in particular situations. We define a semantics of preference reversal by relaxing the popular ceteris-paribus principle. We characterize preference reversal as default reasoning and we link it to prioritized Pareto-optimization, which permits a natural computation of preferred solutions. The resulting method simplifies elicitation, representation, and utilization of complex preference relations and may thus enable a more realistic preference handling in personalized decision support systems and in preference-based intelligent systems.

    Attribute grammar evolution

    The final publication is available at Springer via http://dx.doi.org/10.1007/11499305_19. Proceedings of the First International Work-Conference on the Interplay Between Natural and Artificial Computation, IWINAC 2005, Las Palmas, Canary Islands, Spain, June 15-18, 2005. This paper describes Attribute Grammar Evolution (AGE), a new Automatic Evolutionary Programming algorithm that extends standard Grammar Evolution (GE) by replacing context-free grammars with attribute grammars. GE only takes into account syntactic restrictions to generate valid individuals. AGE adds semantics to ensure that both semantically and syntactically valid individuals are generated. Attribute grammars make it possible to describe the solution semantically. The paper shows empirically that AGE is as good as GE for a classical problem, and proves that including semantics in the grammar can improve GE performance. An important conclusion is that adding too much semantics can make the search difficult.
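    The sketch below is a toy illustration of the general idea, not the AGE algorithm from the paper: a GE-style genotype-to-phenotype mapping over a context-free grammar, extended with a synthesized attribute and a semantic constraint that generated individuals must satisfy. The grammar, the attribute, and the constraint are invented for this sketch.

```python
# GE-style mapping from an integer genotype to an expression, plus a
# synthesized attribute (the set of variables used) that a semantic rule
# checks before an individual is accepted.

import random

GRAMMAR = {
    "<expr>": [["<expr>", "<op>", "<expr>"], ["<var>"]],
    "<op>":   [["+"], ["*"]],
    "<var>":  [["x"], ["y"]],
}

def derive(genotype, symbol="<expr>", pos=0, depth=0, max_depth=6):
    """Each codon selects a production.  Returns (tokens, used_vars,
    next_codon); used_vars is the synthesized attribute."""
    if symbol not in GRAMMAR:                            # terminal symbol
        return [symbol], ({symbol} if symbol in ("x", "y") else set()), pos
    options = GRAMMAR[symbol]
    if depth >= max_depth:                               # force a terminating rule
        choice = options[-1]
    else:
        choice = options[genotype[pos % len(genotype)] % len(options)]
        pos += 1
    tokens, used = [], set()
    for sym in choice:
        t, u, pos = derive(genotype, sym, pos, depth + 1, max_depth)
        tokens += t
        used |= u
    return tokens, used, pos

def semantically_valid(used_vars):
    """Toy semantic rule: the expression must mention both x and y."""
    return used_vars == {"x", "y"}

random.seed(1)
while True:                                 # sample genotypes until the attribute check passes
    genotype = [random.randrange(256) for _ in range(20)]
    tokens, used, _ = derive(genotype)
    if semantically_valid(used):
        print(" ".join(tokens))
        break
```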

    Computing with and without arbitrary large numbers

    In the study of random access machines (RAMs) it has been shown that the availability of an extra input integer, having no special properties other than being sufficiently large, is enough to reduce the computational complexity of some problems. However, this has only been shown so far for specific problems. We provide a characterization of the power of such extra inputs for general problems. To do so, we first correct a classical result by Simon and Szegedy (1992) as well as one by Simon (1981). In the former we show mistakes in the proof and correct these by an entirely new construction, with no great change to the results. In the latter, the original proof direction stands with only minor modifications, but the new results are far stronger than those of Simon (1981). In both cases, the new constructions provide the theoretical tools required to characterize the power of arbitrary large numbers. Comment: 12 pages (main text) + 30 pages (appendices), 1 figure. Extended abstract. The full paper was presented at TAMC 2013. (Reference given is for the paper version, as it appears in the proceedings.)